Add STDIO MCP documentation and update navigation menu #472
MervinPraison merged 1 commit into main.
Conversation
Caution: Review failed — the pull request is closed.

Walkthrough
A new documentation page describing the integration of STDIO with PraisonAI agents via the MCP framework has been introduced. The documentation includes setup guides, code examples, usage with multiple LLMs, and UI integration. The navigation configuration was updated to include this new page within the MCP documentation section.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant ClientScript
    participant MCPServer
    participant AI_Agent
    User->>ClientScript: Provide input/query
    ClientScript->>MCPServer: Send request via STDIO MCP tool
    MCPServer->>AI_Agent: Process request (e.g., calculator operation)
    AI_Agent-->>MCPServer: Return result
    MCPServer-->>ClientScript: Send result via STDIO
    ClientScript-->>User: Display output
```
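The request/response flow in the diagram can be illustrated with a stripped-down stand-in: a child process that reads JSON requests on stdin and writes results to stdout. This is only a sketch of the STDIO transport idea — the real MCP protocol uses JSON-RPC framing, and the inline server code here is hypothetical, not the PR's `calculator_server.py`.

```python
import json
import subprocess
import sys

# Hypothetical stand-in for an STDIO MCP server: reads one JSON request
# per line from stdin and writes a JSON result to stdout.
SERVER_CODE = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    if req.get("tool") == "add":
        print(json.dumps({"result": req["a"] + req["b"]}), flush=True)
"""

# The client spawns the server and talks to it over pipes, mirroring the
# ClientScript -> MCPServer -> result flow in the diagram above.
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER_CODE],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
proc.stdin.write(json.dumps({"tool": "add", "a": 2, "b": 3}) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
print(response["result"])  # 5
```

Because the server is spawned as a subprocess and addressed only through stdin/stdout, the same client logic works regardless of what language the server is written in — the appeal of the STDIO transport.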
Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
This pull request adds documentation for integrating Standard Input/Output (STDIO) with PraisonAI agents using MCP (Model Context Protocol). It includes a guide with code examples for setting up a calculator agent that uses STDIO MCP for communication. The documentation also covers alternative LLM integrations (Groq, Ollama) and a Gradio UI integration. Additionally, the pull request updates the navigation menu to include the new STDIO MCP documentation.
Highlights
- Documentation: Adds a new documentation page for MCP STDIO integration, providing a guide and code examples.
- Quick Start Guide: Includes a quick start guide with steps to create a client file, set up an STDIO MCP server, install dependencies, and run the agent.
- LLM Integrations: Demonstrates alternative LLM integrations with Groq and Ollama using STDIO.
- Gradio UI: Provides an example of integrating STDIO MCP with a Gradio UI for a calculator service.
- Navigation Update: Updates the `mint.json` file to include the new STDIO MCP documentation in the navigation menu.
Changelog
- docs/mcp/stdio.mdx
- Added a new documentation file for STDIO MCP integration.
- Included a quick start guide with code examples for a calculator agent.
- Demonstrated alternative LLM integrations with Groq and Ollama.
- Provided an example of integrating STDIO MCP with a Gradio UI.
- Added a features section highlighting the benefits of STDIO MCP integration.
- docs/mint.json
- Updated the navigation menu to include the new STDIO MCP documentation.
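The changelog above references a `calculator_server.py` used by the quick start. The arithmetic tool logic such a server might expose could look like the sketch below — only the `add` tool is confirmed by the review comments; the other functions are illustrative, and the MCP registration and STDIO server startup are assumed and not shown here.

```python
# Sketch of tool functions a calculator_server.py might expose.
# Only `add` is confirmed by the review comments; the others are
# illustrative. Registering these as MCP tools and running the server
# over STDIO is assumed and not shown.

def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

def subtract(a: float, b: float) -> float:
    """Subtract b from a."""
    return a - b

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

def divide(a: float, b: float) -> float:
    """Divide a by b, guarding against division by zero."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b
```

Keeping the tool functions small and typed this way also helps the agent's LLM map natural-language queries ("what is 2 plus 3?") onto the right tool call.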
Through pipes and streams, data does flow,
From agent's mind to tools below.
With STDIO's grace,
A simple interface,
AI's power starts to grow.
Code Review
This pull request introduces comprehensive documentation for integrating PraisonAI agents with MCP using the STDIO transport. The documentation is well-structured, providing a clear quick start guide, alternative LLM examples, and a Gradio UI integration example. The addition to the navigation menu is also correctly implemented.
The quick start steps and the explanation of the server setup are clear and easy to follow. The inclusion of examples for different LLMs is helpful.
I've reviewed the changes focusing on correctness, efficiency, and maintainability, adhering to common Python and Markdown/JSON practices as no specific style guide was provided. Overall, this is a valuable addition to the documentation.
Summary of Findings
- Inefficient Agent/MCP initialization in Gradio example: The `Agent` and `MCP` instances are created inside the Gradio `calculate` function, leading to a new process being spawned for every user query. This is inefficient and should be moved outside the function for single initialization.
- Logging in server example: The `calculator_server.py` example logs to a file (`calculator_server.log`). For a quick start example running via STDIO, logging to standard error might be more immediately visible and useful for debugging.
- API Key Management: The documentation suggests exporting the API key as an environment variable. While simple for a quick start, mentioning more secure methods like `.env` files or secrets management for real applications could be beneficial.
- Ollama prompt explicitness: The Ollama example prompt explicitly tells the agent to "Use the add tool with parameters a and b." While this demonstrates guiding the LLM, it might not be necessary depending on the LLM's tool-calling capabilities and could potentially be simplified.
Merge Readiness
The documentation is a great addition and the navigation update is correct. However, the Gradio example contains a significant inefficiency issue that should be corrected before merging to prevent users from adopting a poor pattern. Once the high-severity issue is addressed, the pull request should be ready to merge. I am unable to approve this pull request directly; please have another reviewer approve it after the necessary changes are made.
```python
def calculate(query):
    calculator_agent = Agent(
        instructions="""You are a calculator agent that can perform basic arithmetic operations.""",
        llm="gpt-4o-mini",
        tools=MCP("python calculator_server.py")
    )
```
In this Gradio example, the Agent and MCP instances are created inside the calculate function. This means a new Agent and a new MCP server process will be started for every request to the Gradio interface. This is highly inefficient and will lead to significant overhead and resource consumption.
Could you move the Agent and MCP initialization outside the calculate function so they are created only once when the Gradio app starts?
```python
# Initialize Agent and MCP server once, outside the function
calculator_agent = Agent(
    instructions="""You are a calculator agent that can perform basic arithmetic operations.""",
    llm="gpt-4o-mini",
    tools=MCP("python calculator_server.py")
)

def calculate(query):
    # Use the pre-initialized agent
    result = calculator_agent.start(query)
    return f"## Calculation Result\n\n{result}"
```
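The same point can be shown independently of Gradio or the PraisonAI API. In the sketch below, a stand-in class counts how often it is constructed, making the cost of per-request initialization visible — `FakeAgent` is purely illustrative, not the real `Agent`/`MCP` pair:

```python
class FakeAgent:
    """Stand-in for the Agent/MCP pair; counts constructions to make
    the cost of per-request initialization visible."""
    instances = 0

    def __init__(self):
        FakeAgent.instances += 1

    def start(self, query: str) -> str:
        return f"result for {query!r}"

# Created once at import time, like the pre-initialized calculator_agent.
shared_agent = FakeAgent()

def calculate(query: str) -> str:
    # Reuses the shared agent; no new agent/server process per request.
    return shared_agent.start(query)

print(calculate("2 + 2"))
print(calculate("3 * 3"))
print(FakeAgent.instances)  # 1 — a single agent served both requests
```

If the agent were constructed inside `calculate`, the counter would equal the number of requests — and with the real `MCP("python calculator_server.py")` tool, each increment would mean spawning a fresh server subprocess.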
Add STDIO MCP documentation and update navigation menu
PR Type
Documentation
Description
Add comprehensive documentation for MCP STDIO integration.
Provide step-by-step quick start and code examples.
Update navigation menu to include new STDIO documentation.
Changes walkthrough

- stdio.mdx (`docs/mcp/stdio.mdx`): Add detailed MCP STDIO integration documentation for PraisonAI agents.
- mint.json (`docs/mint.json`): Update navigation to include MCP STDIO docs; adds `mcp/stdio` to the MCP documentation navigation group.